The growth curves visualized earlier show that an algorithm's long-term growth rate, not its exact constants, is the critical factor for scalability. We formalize this focus on the upper limit of growth using Big-O Notation.

  • Big-O Notation, expressed mathematically as $f(n) = O(g(n))$, is the standard method for describing the asymptotic upper bound of an algorithm's runtime cost $f(n)$ as the input size $n$ grows toward infinity.
  • Defining the Upper Bound (Worst-Case): Big-O is conventionally used to describe the worst-case scenario—the maximum time an algorithm might take for any input of size $n$. This makes the complexity analysis a reliable performance guarantee.
  • The Power of Asymptotic Analysis: The core principle of Big-O is that it focuses only on the dominating term $g(n)$. This term determines the shape of the growth curve for large inputs, allowing us to ignore constant factors and lower-order terms.
    Rule: If $f(n)$ is the exact runtime, we find the simplest function $g(n)$ such that $f(n)$ is ultimately bounded above by a constant multiple of $g(n)$. $$ f(n) \le c \cdot g(n) \text{ for some constant } c > 0 \text{ and all } n \ge n_0 $$
    For example, if an algorithm's exact cost is $f(n) = 10n + 5 \log_2(n) + 100$:
    • We ignore the constant $100$ and the slower-growing term $5 \log_2(n)$.
    • We ignore the constant multiplier $10$.
    • The complexity is classified as $O(n)$ (Linear time).
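The simplification above can be checked numerically: we only need to exhibit one pair of witness constants. A minimal sketch, where the choices $c = 16$ and $n_0 = 32$ are my own (any sufficiently large pair works):

```python
import math

def f(n):
    # Exact cost from the example: f(n) = 10n + 5*log2(n) + 100
    return 10 * n + 5 * math.log2(n) + 100

# Hypothetical witness constants for the O(n) claim.
c, n0 = 16, 32

# Verify f(n) <= c * g(n) with g(n) = n for a range of inputs n >= n0.
for n in range(n0, 10_000):
    assert f(n) <= c * n, f"bound fails at n={n}"

print("f(n) <= 16n holds for all tested n >= 32")
```

Note that the bound deliberately fails for small $n$ (e.g. $f(16) = 280 > 16 \cdot 16$); Big-O only promises the inequality beyond some threshold $n_0$, which is exactly what "asymptotic" means.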

Key Concepts of Big-O

Big-O provides a formal way to classify algorithms by their worst-case, upper-bound performance. It focuses only on the dominant term that dictates growth for large inputs, ignoring constants and lower-order terms to simplify analysis and comparison.
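To connect the worst-case framing to concrete code, consider linear search: its cost depends on where (or whether) the target appears, and Big-O classifies it by the most expensive case. A minimal sketch with an added comparison counter for illustration:

```python
def linear_search(items, target):
    """Scan left to right; return (index, comparisons made), or (-1, n) if absent."""
    comparisons = 0
    for i, x in enumerate(items):
        comparisons += 1
        if x == target:
            return i, comparisons
    return -1, comparisons

n = 1000
data = list(range(n))

# Best case: target is first, so only 1 comparison is made.
_, best = linear_search(data, 0)

# Worst case: target is absent, so every element is compared -> exactly n.
_, worst = linear_search(data, -1)

# The worst case is bounded by c*n with c = 1, so linear search is O(n).
print(best, worst)  # 1 1000
```

The gap between best and worst case is exactly why Big-O pins the classification to the upper bound: it holds regardless of where the target lands.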